

Search for: All records

Creators/Authors contains: "Mondelli, Marco"


  1. Autoencoders are a popular model in many branches of machine learning and lossy data compression. However, their fundamental limits, the performance of gradient methods, and the features learnt during optimization remain poorly understood, even in the two-layer setting. In fact, earlier work has considered either linear autoencoders or specific training regimes (leading to vanishing or diverging compression rates). Our paper addresses this gap by focusing on non-linear two-layer autoencoders trained in the challenging proportional regime, in which the input dimension scales linearly with the size of the representation. Our results characterize the minimizers of the population risk and show that such minimizers are achieved by gradient methods; their structure is also unveiled, thus leading to a concise description of the features obtained via training. For the special case of a sign activation function, our analysis establishes the fundamental limits for the lossy compression of Gaussian sources via (shallow) autoencoders. Finally, while the results are proved for Gaussian data, numerical simulations on standard datasets display the universality of the theoretical predictions.
    Free, publicly-accessible full text available July 12, 2024
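The abstract above concerns two-layer autoencoders with a sign activation compressing Gaussian sources in the proportional regime. The following is a minimal numerical sketch of that setting, not the paper's construction or its optimal solution: it draws a random (untrained) encoder, quantizes with the sign non-linearity, and fits a linear decoder by least squares to measure the resulting distortion. The dimensions `d`, `r`, `n` and the random encoder are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Proportional regime: latent size r scales linearly with input dimension d.
d, r, n = 200, 100, 1000
X = rng.standard_normal((n, d))   # Gaussian source, one sample per row

# Randomly drawn encoder weights (illustrative; the paper studies trained encoders).
W = rng.standard_normal((r, d)) / np.sqrt(d)
Z = np.sign(X @ W.T)              # one-bit (sign) latent representation

# Linear decoder fitted by least squares: minimizes ||X - Z B||_F^2 over B.
B, *_ = np.linalg.lstsq(Z, X, rcond=None)
X_hat = Z @ B

mse = np.mean((X - X_hat) ** 2)   # per-coordinate distortion
print(f"compression rate r/d = {r / d:.2f}, MSE = {mse:.3f}")
```

Since the source has unit per-coordinate variance, any distortion below 1 indicates the sign-quantized representation retains usable information; how close one can get to the rate-distortion limit at a given rate r/d is exactly the kind of question the paper's analysis answers.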